visionOS


Discuss developing for spatial computing and Apple Vision Pro.

Posts under visionOS tag

200 Posts

Post

Replies

Boosts

Views

Activity

How to speed up build time when placing large USDZ files in RCP scenes
I’m currently developing a visionOS app that includes an RCP scene with a large USDZ file (around 2 GB). Each time I adjust the CG model in Blender, I export it as USDZ again, place it in the RCP scene, and build the app with Xcode. Because the USDZ file is so large, the build takes a long time, which significantly slows down my development. Specifically, I’d like to know if there are any effective ways to:
Improve overall build performance
Reduce the time between updating the USDZ file and completing the build
Any advice or best practices for optimizing this workflow would be greatly appreciated. Best regards, Sadao
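One workaround worth testing, as a sketch rather than a confirmed fix: keep the large USDZ out of the RCP scene and load it at runtime with RealityKit's Entity(contentsOf:), so Xcode never reprocesses the 2 GB asset during a build. The file name and Documents-directory location below are assumptions for illustration.

import RealityKit
import SwiftUI

struct ModelLoadingView: View {
    var body: some View {
        RealityView { content in
            // Hypothetical setup: the USDZ lives in the app's Documents
            // directory (copied via the Files app or at first launch) rather
            // than inside the RCP scene, so replacing it skips the rebuild.
            let url = URL.documentsDirectory.appending(path: "LargeModel.usdz")
            if let model = try? await Entity(contentsOf: url) {
                content.add(model)
            }
        }
    }
}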
1
0
104
2h
Xcode shows alert about unknown com.apple.quicklook.preview extension point when running on Apple Vision Pro Simulator
I have an iOS app with a Quick Look extension. I also added Apple Vision Pro in the target's General > Supported Destinations section. About a year ago, I was able to run the app on the iPhone, iPad, and Apple Vision Pro Simulators. Today I tried running it again on Apple Vision Pro with Xcode 26.0.1, but Xcode shows this error:
Try again later. Appex bundle at ~/Library/Developer/CoreSimulator/Devices/F6B3CCA8-82FA-485F-A306-CF85FF589096/data/Library/Caches/com.apple.mobile.installd.staging/temp.PWLT59/extracted/problem.app/PlugIns/problemQuickLook.appex with id org.example.problem.problemQuickLook specifies a value (com.apple.quicklook.preview) for the NSExtensionPointIdentifier key in the NSExtension dictionary in its Info.plist that does not correspond to a known extension point.
I tried again later a couple of times, even after running Clean Build Folder Immediately, without any change. I can reproduce this with a fresh Xcode project to which I add a Quick Look Preview Extension and Apple Vision Pro as a supported destination. The error doesn't happen when running on the Apple Vision Pro (Designed for iPad) or iPad Pro 13-inch (M4) destinations. What is the problem? I created FB20448815.
5
0
231
12h
Why don’t the dinosaurs in “Encounter Dinosaurs” respond to real-world light intensity?
I have a question about Apple’s preinstalled visionOS app “Encounter Dinosaurs.” In this app, the dinosaurs are displayed over the real-world background, but the PhysicallyBasedMaterial (PBM) in RealityKit doesn’t appear to respond to the actual brightness of the environment. Even when I change the lighting in the room, the dinosaurs’ brightness and shading remain almost the same. If this behavior is intentional — for example, if the app disables real-world lighting influence or uses a fixed lighting setup — could someone explain how and why it’s implemented that way?
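For context, one way an app can pin shading to a fixed lighting setup (whether this is what Encounter Dinosaurs actually does is an open question) is RealityKit's image-based-light components. A minimal sketch, where "DinoLighting" is a hypothetical EnvironmentResource bundled with the app:

import RealityKit

func applyFixedLighting(to entity: Entity) async throws {
    // Load a baked environment map so shading no longer follows the room.
    let environment = try await EnvironmentResource(named: "DinoLighting")
    // Attach a fixed image-based light...
    entity.components.set(ImageBasedLightComponent(source: .single(environment)))
    // ...and make the entity hierarchy receive it instead of real-world lighting.
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))
}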
1
0
513
16h
[WebXR] Support for AR module in visionOS 2.x
Thank you again for pushing the web forward in visionOS 2, super exciting! The latest WWDC24 video touched on VR experiences for visionOS 2.0 using WebXR; however, there was no mention of passthrough AR experiences. Samples such as this one are not supported: https://immersive-web.github.io/webxr-samples/immersive-ar-session.html In Settings > Safari, there is a feature flag for the AR WebXR module, but enabling it did not seem to change anything. Is this the expected behavior at this time? Are there any developer previews we could try?
10
6
2.3k
1d
Game Controller Input Limitations in visionOS Volumetric Windows - Need Clarification
Game Controller Input Limitations in visionOS Volumetric Windows

Hello Apple Developer Community,

I'm developing a game for visionOS and have encountered significant limitations with game controller input when using volumetric windows (WindowGroup with the .volumetric style). I'd appreciate clarification on whether this is expected behavior and any guidance on best practices.

🧩 Issue Summary
When using a DualSense controller with a volumetric window in visionOS, only a subset of controller inputs is available to the app. The remaining inputs appear to be reserved by the system for UI navigation.

✅ Working Inputs (Volumetric Window)
D-Pad (all directions)
L3 (left thumbstick button click)
R3 (right thumbstick button click)
Menu button
Options button

❌ Not Working Inputs (Volumetric Window)
Left thumbstick analog movement (used for UI scrolling instead)
Right thumbstick analog movement (used for UI scrolling instead)
Face buttons (Cross, Circle, Square, Triangle / A, B, X, Y)
Shoulder buttons (L1, R1)
Triggers (L2, R2)

Key observation: When moving the left thumbstick in a volumetric window, the window's UI scrolls vertically instead of sending input to my app's GameController handlers. Similarly, face buttons seem to be reserved for system UI interactions.

⚙️ Implementation Details
I'm using the standard GameController framework (a minimal sketch of this setup appears at the end of this post):
Connect to the controller via GCController.controllers()
Access the extendedGamepad profile
Set up valueChangedHandler and pressedChangedHandler for all inputs
Handlers confirmed registered via logging
Working inputs (D-Pad, L3, R3) trigger immediately and consistently
Non-working inputs (thumbsticks, face buttons) never trigger

🧠 Critical Finding: ImmersiveSpace Works Perfectly
When testing the exact same code in an ImmersiveSpace (.mixed immersion style), all controller inputs work perfectly:
✅ Both thumbsticks provide full analog input
✅ All face buttons trigger their handlers
✅ All shoulder buttons and triggers work correctly
✅ 100% success rate with no intermittent issues
This suggests the issue isn't with my code, but rather with how visionOS handles controller input differently between volumetric windows and ImmersiveSpace.

🧪 Test Environment
I created a minimal test project (Controller-Playground) to isolate the issue:
A simple ControllerTester class that registers all GameController handlers
A visual UI showing real-time input state
No game logic, RealityKit physics, or other complexity

Results
In a volumetric window: only D-Pad, L3, R3, Menu, and Options work
In an ImmersiveSpace: all inputs work perfectly
This confirms the limitation exists at the visionOS platform level, not in app code.

🧰 Attempted Workarounds
I tried the following without success:
Setting GCSupportsControllerUserInteraction = false in Info.plist
Setting UIRequiresFullScreen = true
Changing window styles (.plain, .volumetric)
Polling vs. handler-based input approaches
Various threading models (MainActor, separate thread)
Result: the only way to enable full controller support is to switch to an ImmersiveSpace.

❓ Questions for Apple
Is this input reservation behavior in volumetric windows intended and documented?
Are game controllers expected to have limited functionality in volumetric windows, while full functionality is reserved for ImmersiveSpace?
Is there a way to request full controller input access in a volumetric window, or is ImmersiveSpace the only option for complete controller support?
Where can I find official documentation about controller input differences between window types?
Are there any APIs or configuration options to disable system controller shortcuts in volumetric windows?

🎯 Impact
This limitation has a significant effect on game design and architecture:
Volumetric windows offer a multitasking-friendly, less immersive experience
ImmersiveSpace provides full controller support but may be more immersive than some games require
Games that only need basic D-Pad and button input can work fine in volumetric windows
Games requiring analog sticks or face buttons must currently use an ImmersiveSpace
It would be very helpful if Apple could clarify or point to existing documentation regarding controller input handling in different visionOS window types. If such documentation doesn't exist yet, it would be valuable to include this information in future developer guides or best-practice documents.

🕹 Current Workaround
For now, I'm using:
D-Pad for character movement (digital 8-direction)
R3 (right stick click) as a substitute for the "X" button
This setup allows the game to function within a volumetric window, though full controller support still requires an ImmersiveSpace.

📄 Request
If this is expected behavior, I may simply have missed the relevant documentation; could you please point me to any existing resources that explain this design? If there isn't one yet, it would be great if future visionOS documentation could:
Clearly outline controller input behavior across window types
Provide guidance on when to use volumetric windows vs. ImmersiveSpace for games
Consider adding an API option to request full controller access when appropriate
If this is not expected behavior, I'm happy to file a detailed bug report with sample code.

💻 System Information
visionOS: latest Simulator
Xcode: latest version
Controller: Sony DualSense
Framework: GameController (standard extendedGamepad profile)
Test project: minimal reproducible example available

Thank you for any clarification or guidance you can provide. This information would be valuable for many developers working on visionOS games.
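For reference, a minimal sketch of the handler registration the post describes (the class shape and logging are illustrative, not the original project's code):

import GameController

final class ControllerTester {
    private var token: NSObjectProtocol?

    func start() {
        // Hook a controller now and whenever one connects later.
        token = NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main
        ) { [weak self] _ in
            self?.hookFirstController()
        }
        hookFirstController()
    }

    private func hookFirstController() {
        guard let gamepad = GCController.controllers().first?.extendedGamepad else { return }
        // D-Pad: reportedly fires in both volumetric windows and ImmersiveSpace.
        gamepad.dpad.valueChangedHandler = { _, x, y in print("dpad:", x, y) }
        // Left stick: reportedly captured for UI scrolling in a volumetric window.
        gamepad.leftThumbstick.valueChangedHandler = { _, x, y in print("left stick:", x, y) }
        // Face button: reportedly never fires in a volumetric window.
        gamepad.buttonA.pressedChangedHandler = { _, _, pressed in print("cross/A:", pressed) }
    }
}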
1
0
476
5d
Carousel Flicker
Hi, I've created a carousel using a TabView component, divided into pages of 7 items each. Now we want to load the carousel page by page, but whenever I update the list the TabView renders from, it flickers because it re-renders the previous elements. I tried using a ScrollView for this instead, but this app is for Vision Pro and I added some 3D transformations (position/rotation) to the cards from the TabView, and when I added these in the ScrollView the cards become unclickable. Do you have any suggestions for what I can do? Below is a sample of the code; I put [items] just to indicate there is a list.

VStack(spacing: 45) {
    TabView(selection: $navigationModel.carouselSelectedPage) {
        let orderedArtist = [items]
        let numberOfPages = Int((Double(orderedArtist.count) / Double(cardsPerPage)).rounded(.up))
        if numberOfPages != 1 {
            Spacer().tag(-1)
        }
        ForEach(0..<numberOfPages, id: \.self) { page in
            LazyHStack(alignment: .top, spacing: 16) {
                ForEach(0..<cardsPerPage, id: \.self) { index in
                    if page * cardsPerPage + index < orderedArtist.count {
                        let item = orderedArtist[page * cardsPerPage + index]
                        GeometryReader { proxy in
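One suggestion to try, sketched under the assumption that the items can be made Identifiable: chunk the data into pages with stable identities, so appending new pages doesn't change the identity of pages the TabView has already rendered (a common cause of this kind of flicker). The Item model and Text stand-in below are placeholders for the real model and card view.

import SwiftUI

struct Item: Identifiable {
    let id: Int
    let name: String
}

struct Page: Identifiable {
    let id: Int
    let items: [Item]
}

struct PagedCarousel: View {
    let items: [Item]
    private let cardsPerPage = 7
    @State private var selection = 0

    // Pages keep a stable identity as more items are appended, so TabView
    // does not rebuild (and flicker) pages it has already rendered.
    private var pages: [Page] {
        stride(from: 0, to: items.count, by: cardsPerPage).map { start in
            Page(id: start / cardsPerPage,
                 items: Array(items[start..<min(start + cardsPerPage, items.count)]))
        }
    }

    var body: some View {
        TabView(selection: $selection) {
            ForEach(pages) { page in
                LazyHStack(alignment: .top, spacing: 16) {
                    ForEach(page.items) { item in
                        Text(item.name) // stand-in for the real card view
                    }
                }
                .tag(page.id)
            }
        }
    }
}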
2
0
354
6d
Persistent Entity Position
I want to let users place 2D/3D "artworks" on detected walls and have them reappear in exactly the same real-world spot after quitting and relaunching the app (like widgets do, but for my own entities).

Environment: Xcode 26, visionOS 2.0, RealityKit + ARKitSession/WorldTrackingProvider. Entities are parented to a holder that's aligned to a wall via plane/mesh raycasts.

What I've tried (sketched in code below):
Create a WorldAnchor at placement, save its UUID + full 4×4 transform
On the next launch, re-create the WorldAnchor (or set the saved transform) and attach the entity
Gate restore on relocalization/mesh updates and disable all raycast/search after restore

Issue: After relaunch, placement still resolves relative to the current device pose, not the same wall position.

Questions:
Is there a public API in visionOS 2.0 to persist app-managed world anchors across sessions (room-fixed), e.g., an AnchorStore or equivalent?
If not, what's the recommended pattern to reliably restore wall-anchored content?
Are the persistence features used by widgets/windows available to third-party RealityKit entities?
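For the discussion, here is a stripped-down version of the save/restore flow the post describes, assuming (as I understand the ARKit documentation) that visionOS persists WorldAnchors across sessions and re-delivers them by UUID after relocalization; the UserDefaults key is a placeholder:

import ARKit
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

func startTracking() async throws {
    try await session.run([worldTracking])
}

// At placement: create a world anchor and remember its id for later launches.
func saveAnchor(at transform: simd_float4x4) async throws -> UUID {
    let anchor = WorldAnchor(originFromAnchorTransform: transform)
    try await worldTracking.addAnchor(anchor)
    UserDefaults.standard.set(anchor.id.uuidString, forKey: "artworkAnchorID") // placeholder key
    return anchor.id
}

// On relaunch: wait for the system to re-deliver the stored anchor, then
// reattach the entity at the anchor's transform (not a saved device-relative one).
func restore(entity: Entity, savedID: UUID) async {
    for await update in worldTracking.anchorUpdates {
        guard update.anchor.id == savedID else { continue }
        switch update.event {
        case .added, .updated:
            entity.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
        default:
            break
        }
    }
}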
1
0
185
6d
visionOS – Starting GroupActivity FaceTime Call dismisses Immersive Space
Hello, I am in the process of implementing SharePlay support in my visionOS app. Everything runs fine when I test locally, but when my app is distributed via TestFlight, calling try await activity.activate() shows the SharePlay dialog as usual, but then when I start a new FaceTime call, my ImmersiveSpace gets dismissed. This only happens when the app is distributed via TestFlight; when I run it locally, the ImmersiveSpace stays active as expected. Looking at the console on my Mac I found this log:
Invalid initial client settings class: UIApplicationSceneClientSettings; expected class: MRUISharedApplicationSceneClientSettings; bundle ID: com.apple.facetime; scene ID: com.apple.facetime:SFBSystemService-DDA8C751-C0C4-487E-AD85-59EF4E6C6050
Does anyone have an idea how I can fix this? It's driving me nuts; I've wasted over a day looking for a workaround but have so far been unsuccessful. Thanks!
6
1
860
1w
visionOS + Unity PolySpatial: Is 15,970 MeshFilters the True Upper Limit for Industrial Scenes?
Breaking Through PolySpatial's ~8k Object Limit – Seeking Alternative Approaches for Large-Scale Digital Twins

Confirmed: PolySpatial's mirroring doubles the MeshFilter count – hard limit at ~8k active objects (15.9k total)

Project Context & Research Goals
I'm developing an industrial digital twin application for Apple Vision Pro using Unity's PolySpatial framework (RealityKit rendering in Unbounded_Volume mode). The scene contains complex factory environments with:
Production line equipment (many fragmented grid objects that need to be merged)
Dynamic product racks (state-switchable assets)
Animated worker avatars
To optimize performance, I'm systematically testing visionOS's rendering capacity limits. Through controlled stress tests, I've identified a critical threshold.

Key Finding
When the total MeshFilter count reaches 15,970 (system baseline + 7,985 user-created objects × 2 due to PolySpatial cloning), the application crashes consistently. This suggests:
PolySpatial's mirroring mechanism effectively doubles GameObject overhead
An apparent hard limit exists around ~8k active mesh objects in practice

Objectives for This Discussion
Verify whether others have encountered similar limits with PolySpatial/RealityKit
Understand whether this is a memory constraint (per-app allocation), a render pipeline limit (Metal draw calls), or Unity-specific PolySpatial behavior
Explore optimization strategies beyond brute-force object reduction

Why This Matters
Industrial metaverse applications require rendering thousands of interactive objects. Confirming these limits will help our team:
Design safer content guidelines
Prioritize GPU instancing/LOD investments
Potentially contribute back to PolySpatial's optimization

I'd appreciate insights from engineers who have:
Pushed similar large-scale scenes in visionOS
Worked around PolySpatial's cloning overhead
Discovered alternative capacity limits (vertices/draw calls)
4
0
606
1w
WebPage "older version of your browser"
I have a visionOS app using Apple's WebView and WebPage to display web content. When viewing a live YouTube stream last night, YouTube put up this warning in the area that would normally show the chat window:
Oh no! It looks like you're using an older version of your browser. Please update it to use live chat.
Does anyone know whether YouTube generates this on the server based on the WebPage's user agent string, from JavaScript running in the browser engine, or something else? And does anyone know if and how it is possible to resolve it?
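To test the user-agent theory, one experiment (assuming WebPage exposes customUserAgent the way WKWebView does; worth verifying against the current API) is to present a recent Safari UA string and see whether the warning disappears:

import SwiftUI
import WebKit

struct BrowserView: View {
    @State private var page = WebPage()

    var body: some View {
        WebView(page)
            .onAppear {
                // Assumption: customUserAgent behaves like WKWebView's property.
                // Presenting a desktop Safari UA tests whether YouTube's warning
                // is driven by server-side UA sniffing.
                page.customUserAgent =
                    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) " +
                    "AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.0 Safari/605.1.15"
                page.load(URLRequest(url: URL(string: "https://www.youtube.com")!))
            }
    }
}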
0
0
158
1w
Triangle Recommendation Limit for M5 Vision Pro
For the M2 Apple Vision Pro, there's "a general guideline, we recommend no more than 500 thousand triangles for an immersive scene, with 250 thousand for applications in the shared space." --https://developer.apple.com/videos/play/wwdc2024/10186/?time=147 Is there a revised recommendation for the M5 Apple Vision Pro?
0
1
224
1w
Custom Material half visible..?
I'm currently implementing 180°/360° immersive video for my app. I implemented 360° easily by applying a VideoMaterial to a flipped sphere, but I'm stuck on 180°. I'm trying to implement it by applying a VideoMaterial to a hemisphere (half sphere): I want the VideoMaterial to be visible on the front half of the sphere and the back half to be transparent/clear. Is there any advice, information, or ideas for implementing this? Your help would be greatly appreciated.
0
0
64
1w
Material showing only half?
Hi, I'm currently implementing 180°/360° playback for immersive video in my app. I was able to implement 360° easily by applying a VideoMaterial to a flipped sphere. However, I'm a bit stuck on 180°. I want to implement it by setting a VideoMaterial on a hemisphere mesh, but RealityKit doesn't yet provide a built-in generator such as MeshResource.generateHemisphere, so instead I want to make the VideoMaterial visible on the front half of the sphere and transparent on the back half. I thought this would make my sphere look like a hemisphere, but I can't find a way to implement it. I would appreciate any advice, ideas, or information that might help.
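Since MeshResource has no generateHemisphere, one alternative is to build the half-sphere mesh yourself with MeshDescriptor. A rough sketch (the UV layout and triangle winding here are assumptions and may need flipping for inside-out viewing):

import Foundation
import RealityKit

// Front half of a sphere (longitude swept over 180°), UV-mapped for a
// 180° video frame. Flip the index order if the faces come out invisible
// when viewed from inside.
func generateHemisphere(radius: Float, slices: Int = 64, stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for stack in 0...stacks {
        let v = Float(stack) / Float(stacks)
        let theta = v * .pi                     // latitude: 0 (top) ... pi (bottom)
        for slice in 0...slices {
            let u = Float(slice) / Float(slices)
            let phi = (u - 0.5) * .pi           // longitude: -90° ... +90° (front half)
            positions.append(SIMD3(radius * sin(theta) * sin(phi),
                                   radius * cos(theta),
                                   -radius * sin(theta) * cos(phi)))
            uvs.append(SIMD2(u, 1 - v))
        }
    }

    let columns = UInt32(slices + 1)
    for stack in 0..<UInt32(stacks) {
        for slice in 0..<UInt32(slices) {
            let a = stack * columns + slice
            let b = a + columns
            indices.append(contentsOf: [a, a + 1, b,  a + 1, b + 1, b])
        }
    }

    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffer(positions)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

A VideoMaterial applied to a ModelEntity with this mesh would then only cover the front 180°, with nothing rendered behind the viewer.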
0
0
38
1w
ManipulationComponent Not Translating using indirect input
When using the new RealityKit ManipulationComponent on entities, indirect input never translates the entity, no matter what settings are applied. Direct manipulation works as expected for both translation and rotation. Is this intended behaviour? This is different from how indirect manipulation works on Model3D. How else can we get translation from this component?

visionOS 26 Beta 2
Built with macOS 26 Beta 2 and Xcode 26 Beta 2

Attached is replicable sample code; I have tried this in other projects with the same results.

var body: some View {
    RealityView { content in
        // Add the initial RealityKit content
        if let immersiveContentEntity = try? await Entity(named: "MovieFilmReel", in: reelRCPBundle) {
            ManipulationComponent.configureEntity(
                immersiveContentEntity,
                allowedInputTypes: .all,
                collisionShapes: [ShapeResource.generateBox(width: 0.2, height: 0.2, depth: 0.2)]
            )
            immersiveContentEntity.position.y = 1
            immersiveContentEntity.position.z = -0.5
            var mc = ManipulationComponent()
            mc.releaseBehavior = .stay
            immersiveContentEntity.components.set(mc)
            content.add(immersiveContentEntity)
        }
    }
}
11
3
938
1w
Automating pickerWheels in visionOS
Hi! I am learning Swift and UIKit for work. I am trying to automate using a pickerWheel in visionOS, but since .adjust(toValue:) was removed from visionOS's API, I am struggling to find a way to set a pickerWheel to a specific value. Currently, my approach is to calculate how many times I would need to increment/decrement the wheel to get from the current value to the desired value, and then do so one step at a time. However, this does not work: .accessibilityIncrement() and .accessibilityDecrement() have no effect, and .swipeUp() and .swipeDown() go too far. What can I do? Note: I am not a frontend engineer, so while solutions may exist that involve changes to the frontend, I would much rather get the frontend we do have to work as is.
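One more avenue to try, as an untested sketch (the per-step drag distance is a guess that will need tuning): drive the wheel with short coordinate-based drags, one detent at a time, instead of swipeUp()/swipeDown():

import XCTest

func adjustPickerWheel(_ wheel: XCUIElement, by steps: Int) {
    let center = wheel.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.5))
    // Drag up (negative dy) to advance to later values, down to go back.
    // 0.12 of the wheel's height per step is a placeholder; tune per control.
    let offset = 0.12 * (steps > 0 ? -1.0 : 1.0)
    let target = wheel.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.5 + offset))
    for _ in 0..<abs(steps) {
        // A short, slow drag moves roughly one row, unlike swipeUp(), which flings.
        center.press(forDuration: 0.05, thenDragTo: target)
    }
}

// Usage: move the first wheel three values forward.
// adjustPickerWheel(app.pickerWheels.element(boundBy: 0), by: 3)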
1
0
45
1w